Social Security Workers Are Being Told to Hand Over Appointment Details to ICE

WIRED

The recent request goes against decades of precedent and puts noncitizens at further risk of immigration enforcement actions. Workers at the Social Security Administration have been told to share information about in-person appointments with agents of Immigration and Customs Enforcement, WIRED has learned. "If ICE comes in and asks if someone has an upcoming appointment, we will let them know the date and time," an employee with direct knowledge of the directive says. They spoke on the condition of anonymity for fear of retaliation. While the majority of appointments with SSA take place over the phone, some appointments still happen in person.


Reports of the Association for the Advancement of Artificial Intelligence's 2025 Fall Symposium Series

Interactive AI Magazine

The Association for the Advancement of Artificial Intelligence's 2025 Fall Symposium Series was held November 6-8, 2025, at the Westin Arlington Gateway in Arlington, Virginia. There were six symposia in the program: AI for Social Good: Emerging Methods, Measures, Data, and Ethics; AI Trustworthiness and Risk Assessment for Challenged Contexts; Engineering Safety-Critical AI Systems; First AAAI Symposium on Quantum Information and Machine Learning: Bridging Quantum Computing and Artificial Intelligence; Safe, Ethical, Certified, Uncertainty-aware, Robust, and Explainable AI for Health; and Unifying Representations for Robot Application Development. This report contains summaries of the symposia, which were submitted by most, but not all, of the symposium organizers. AI has demonstrated transformative potential across sectors such as aging, combating information manipulation, disaster response, education, environmental sustainability, government, healthcare, social care, transportation, and urban planning. Yet the systematic development of AI for Social Good remains fragmented across those many research communities, with limited convergence around effective methodologies, equitable impact measurement, or access to important data and long-term engagement with targeted populations. The main objective of this symposium was to convene across disciplines and engage researchers, practitioners, and policymakers, with a particular focus on finding methods, measures, and data that could be used in multiple settings. There were roughly 30 participants.




Deep Deterministic Nonlinear ICA via Total Correlation Minimization with Matrix-Based Entropy Functional

Li, Qiang, Yu, Shujian, Ma, Liang, Ma, Chen, Liu, Jingyu, Adali, Tulay, Calhoun, Vince D.

arXiv.org Machine Learning

Blind source separation, particularly through independent component analysis (ICA), is widely used across signal processing domains to disentangle underlying components from observed mixed signals, owing to its fully data-driven nature that minimizes reliance on prior assumptions. However, conventional ICA methods rely on an assumption of linear mixing, limiting their ability to capture complex nonlinear relationships and to remain robust in noisy environments. In this work, we present deep deterministic nonlinear independent component analysis (DDICA), a novel deep neural network-based framework designed to address these limitations. DDICA leverages a matrix-based entropy functional to directly optimize the independence criterion via stochastic gradient descent, bypassing the need for variational approximations or adversarial schemes. This results in a streamlined training process and improved resilience to noise. We validated the effectiveness and generalizability of DDICA across a range of applications, including simulated signal mixtures, hyperspectral image unmixing, modeling of primary visual receptive fields, and resting-state functional magnetic resonance imaging (fMRI) data analysis. Experimental results demonstrate that DDICA separates independent components with high accuracy in each of these settings, suggesting it offers a robust and versatile solution for blind source separation in diverse signal processing tasks.
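The independence criterion the abstract describes — minimizing total correlation via a matrix-based entropy functional — can be sketched in plain NumPy. This is a hedged toy, not the authors' implementation: it assumes Gaussian Gram matrices, treats the joint Gram matrix as the Hadamard product of per-dimension Gram matrices (the usual matrix-based convention), and the kernel width `sigma` and entropy order `alpha` are illustrative choices.

```python
import numpy as np

def renyi_entropy(K, alpha=1.01):
    # matrix-based Renyi entropy from the eigenvalues of a
    # trace-normalized Gram matrix
    A = K / np.trace(K)
    lam = np.clip(np.linalg.eigvalsh(A), 1e-12, None)
    return (1.0 / (1.0 - alpha)) * np.log2(np.sum(lam ** alpha))

def total_correlation(Z, sigma=1.0, alpha=1.01):
    """Sum of marginal entropies minus the joint entropy of the columns
    of Z (n samples x d components). Zero-ish when columns are independent."""
    n, d = Z.shape
    K_joint = np.ones((n, n))
    h_marginals = 0.0
    for i in range(d):
        dist2 = (Z[:, i:i + 1] - Z[:, i:i + 1].T) ** 2
        K_i = np.exp(-dist2 / (2.0 * sigma ** 2))
        h_marginals += renyi_entropy(K_i, alpha)
        K_joint *= K_i  # Hadamard product defines the joint Gram matrix
    return h_marginals - renyi_entropy(K_joint, alpha)
```

In the DDICA setting this quantity would be computed on the network's outputs and driven down by stochastic gradient descent, pushing the estimated components toward independence.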


Extrapolation of Periodic Functions Using Binary Encoding of Continuous Numerical Values

Powell, Brian P., Caraballo-Vega, Jordan A., Carroll, Mark L., Maxwell, Thomas, Ptak, Andrew, Olmschenk, Greg, Martinez-Palomera, Jorge

arXiv.org Machine Learning

We report the discovery that binary encoding allows neural networks to extrapolate periodic functions beyond their training bounds. We introduce Normalized Base-2 Encoding (NB2E) as a method for encoding continuous numerical values and demonstrate that, using this input encoding, vanilla multi-layer perceptrons (MLP) successfully extrapolate diverse periodic signals without prior knowledge of their functional form. Internal activation analysis reveals that NB2E induces bit-phase representations, enabling MLPs to learn and extrapolate signal structure independently of position.
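The abstract does not spell the encoding out, but the core idea can be sketched: a value already normalized to [0, 1) is expanded into its leading binary digits, so each scalar input becomes a vector of bits whose flips repeat at period-of-two phases. The function name and bit count below are illustrative assumptions, not the authors' API.

```python
import numpy as np

def nb2e(x, n_bits=16):
    """Sketch of a normalized base-2 encoding: expand values normalized
    to [0, 1) into their leading n_bits binary digits."""
    x = np.asarray(x, dtype=np.float64)
    bits = np.empty(x.shape + (n_bits,))
    frac = x.copy()
    for i in range(n_bits):
        frac = frac * 2.0          # shift the next binary digit left of the point
        bit = np.floor(frac)       # extract it
        bits[..., i] = bit
        frac = frac - bit          # keep the remaining fraction
    return bits
```

For example, `nb2e(0.625, 4)` yields `[1, 0, 1, 0]`, since 0.625 = 0.101 in base 2; an MLP fed such vectors sees periodic bit-phase features rather than a single monotone coordinate.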


Differentially Private Synthetic Data Generation Using Context-Aware GANs

Kotal, Anantaa, Joshi, Anupam

arXiv.org Artificial Intelligence

The widespread use of big data across sectors has raised major privacy concerns, especially when sensitive information is shared or analyzed. Regulations such as GDPR and HIPAA impose strict controls on data handling, making it difficult to balance the need for insights with privacy requirements. Synthetic data offers a promising solution by creating artificial datasets that reflect real patterns without exposing sensitive information. However, traditional synthetic data methods often fail to capture complex, implicit rules that link different elements of the data and are essential in domains like healthcare. They may reproduce explicit patterns but overlook domain-specific constraints that are not directly stated yet crucial for realism and utility. For example, prescription guidelines that restrict certain medications for specific conditions or prevent harmful drug interactions may not appear explicitly in the original data. Synthetic data generated without these implicit rules can lead to medically inappropriate or unrealistic profiles. To address this gap, we propose ContextGAN, a Context-Aware Differentially Private Generative Adversarial Network that integrates domain-specific rules through a constraint matrix encoding both explicit and implicit knowledge. The constraint-aware discriminator evaluates synthetic data against these rules to ensure adherence to domain constraints, while differential privacy protects sensitive details from the original data. We validate ContextGAN across healthcare, security, and finance, showing that it produces high-quality synthetic data that respects domain rules and preserves privacy. Our results demonstrate that ContextGAN improves realism and utility by enforcing domain constraints, making it suitable for applications that require compliance with both explicit patterns and implicit rules under strict privacy guarantees.
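The constraint-matrix idea can be illustrated with a small hypothetical example. Everything below is an assumption for illustration — the matrix layout, the `violation_rate` helper, and the toy prescription rules are invented, not ContextGAN's actual encoding: rows index conditions, columns index medications, and a 1 marks a pairing an implicit domain rule forbids.

```python
import numpy as np

# Hypothetical rule matrix: forbidden[c, m] == 1 means medication m
# must not be prescribed for condition c (an implicit domain rule).
forbidden = np.array([
    [0, 1, 0],   # condition 0 must not receive medication 1
    [0, 0, 1],   # condition 1 must not receive medication 2
])

def violation_rate(records, forbidden):
    """Fraction of synthetic (condition, medication) records that break a rule."""
    conds, meds = records[:, 0], records[:, 1]
    return forbidden[conds, meds].mean()

# four synthetic records as (condition, medication) pairs
records = np.array([[0, 1], [0, 0], [1, 2], [1, 1]])
```

A constraint-aware discriminator could fold such a violation score into its loss, penalizing generated samples that match explicit statistics but break implicit rules.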


A Unifying Human-Centered AI Fairness Framework

Rahman, Munshi Mahbubur, Pan, Shimei, Foulds, James R.

arXiv.org Artificial Intelligence

The increasing use of Artificial Intelligence (AI) in critical societal domains has amplified concerns about fairness, particularly regarding unequal treatment across sensitive attributes such as race, gender, and socioeconomic status. While there has been substantial work on ensuring AI fairness, navigating trade-offs between competing notions of fairness as well as predictive accuracy remains challenging, creating barriers to the practical deployment of fair AI systems. To address this, we introduce a unifying human-centered fairness framework that systematically covers eight distinct fairness metrics, formed by combining individual and group fairness, infra-marginal and intersectional assumptions, and outcome-based and equality-of-opportunity (EOO) perspectives. This structure allows stakeholders to align fairness interventions with their values and contextual considerations. The framework uses a consistent and easy-to-understand formulation for all metrics to reduce the learning curve for non-experts. Rather than privileging a single fairness notion, the framework enables stakeholders to assign weights across multiple fairness objectives, reflecting their priorities and facilitating multi-stakeholder compromises. We apply this approach to four real-world datasets: the UCI Adult census dataset for income prediction, the COMPAS dataset for criminal recidivism, the German Credit dataset for credit risk assessment, and the MEPS dataset for healthcare utilization. We show that adjusting weights reveals nuanced trade-offs between different fairness metrics. Finally, through case studies in judicial decision-making and healthcare, we demonstrate how the framework can inform practical and value-sensitive deployment of fair AI systems.
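The weighted multi-objective idea can be sketched with two standard group-fairness gaps — demographic parity and equal opportunity. This is a deliberately reduced two-metric toy, not the paper's eight-metric family, and the weights below are arbitrary stakeholder choices.

```python
import numpy as np

def demographic_parity_gap(y_pred, group):
    """Largest difference in positive-prediction rate across groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equal_opportunity_gap(y_pred, y_true, group):
    """Largest difference in true-positive rate across groups
    (assumes every group contains at least one positive example)."""
    rates = [y_pred[(group == g) & (y_true == 1)].mean()
             for g in np.unique(group)]
    return max(rates) - min(rates)

def weighted_fairness_objective(y_pred, y_true, group, w_dp=0.5, w_eoo=0.5):
    # stakeholders set the weights; lower is fairer under this mix
    return (w_dp * demographic_parity_gap(y_pred, group)
            + w_eoo * equal_opportunity_gap(y_pred, y_true, group))
```

Re-weighting `w_dp` against `w_eoo` (and against predictive accuracy, in a full pipeline) is how such a framework surfaces the trade-offs the abstract describes.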


Non-Negative Matrix Factorization Using Non-Von Neumann Computers

Borle, Ajinkya, Nicholas, Charles, Chukwu, Uchenna, Miri, Mohammad-Ali, Chancellor, Nicholas

arXiv.org Artificial Intelligence

Non-negative matrix factorization (NMF) is a matrix decomposition problem with applications in unsupervised learning. The general form of this problem (along with many of its variants) is NP-hard. In our work, we explore how this problem could be solved with an energy-based optimization method suitable for certain machines with non-von Neumann architectures. We used the Dirac-3, a device based on the entropy computing paradigm and made by Quantum Computing Inc., to evaluate our approach. Our formulations consist of (i) a quadratic unconstrained binary optimization model (QUBO, suitable for Ising machines) and (ii) a quartic formulation that allows for real-valued and integer variables (suitable for machines like the Dirac-3). Although current devices cannot solve large NMF problems, the results of our preliminary experiments are promising enough to warrant further research. For non-negative real matrices, we observed that a fusion approach of first using Dirac-3 and then feeding its results as the initial factor matrices to Scikit-learn's NMF procedure outperforms Scikit-learn's NMF procedure on its own with default parameters, in terms of the error in the reconstructed matrices. For our experiments on non-negative integer matrices, we compared the Dirac-3 device to Google's CP-SAT solver (inside the OR-Tools package) and found that, for serial processing, Dirac-3 outperforms CP-SAT in a majority of cases. We believe that future work in this area might identify domains and variants of the problem where entropy computing (and other non-von Neumann architectures) could offer a clear advantage.
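The objective all of these formulations target is the Frobenius reconstruction error ||V − WH||². As a point of reference for what the classical baseline in the fusion experiments is doing, here is a sketch of the standard multiplicative-update NMF solver (Lee & Seung) for that objective — not the Dirac-3 QUBO/quartic formulation itself, and the iteration count and seed are illustrative.

```python
import numpy as np

def nmf_multiplicative(V, rank, n_iter=500, seed=0):
    """Classical multiplicative-update NMF minimizing ||V - W @ H||_F^2.
    W, H stay non-negative because updates multiply by non-negative ratios."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, rank)) + 1e-3
    H = rng.random((rank, m)) + 1e-3
    for _ in range(n_iter):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-12)  # update H with W fixed
        W *= (V @ H.T) / (W @ H @ H.T + 1e-12)  # update W with H fixed
    return W, H
```

A hybrid scheme in the spirit of the paper would replace the random `W`, `H` above with factor matrices returned by the hardware solver and then refine them classically.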


SD-CGAN: Conditional Sinkhorn Divergence GAN for DDoS Anomaly Detection in IoT Networks

Onyeka, Henry, Samson, Emmanuel, Hong, Liang, Islam, Tariqul, Ahmed, Imtiaz, Hasan, Kamrul

arXiv.org Artificial Intelligence

The increasing complexity of IoT edge networks presents significant challenges for anomaly detection, particularly in identifying sophisticated Denial-of-Service (DoS) attacks and zero-day exploits under highly dynamic and imbalanced traffic conditions. This paper proposes SD-CGAN, a Conditional Generative Adversarial Network framework enhanced with Sinkhorn Divergence, tailored for robust anomaly detection in IoT edge environments. The framework incorporates CTGAN-based synthetic data augmentation to address class imbalance and leverages Sinkhorn Divergence as a geometry-aware loss function to improve training stability and reduce mode collapse. The model is evaluated on exploitative attack subsets from the CICDDoS2019 dataset and compared against baseline deep learning and GAN-based approaches. Results show that SD-CGAN achieves superior detection accuracy, precision, recall, and F1-score while maintaining computational efficiency suitable for deployment in edge-enabled IoT environments.
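The Sinkhorn Divergence used as the loss above is the debiased entropic optimal-transport cost between two sample batches. A plain NumPy sketch follows; the regularization `eps`, iteration count, and squared-distance cost are illustrative choices, and libraries differ in whether the entropic term itself is included in the debiasing, so treat this as one common convention rather than the paper's exact loss.

```python
import numpy as np

def sinkhorn_cost(X, Y, eps=1.0, n_iter=100):
    """Entropic OT cost between uniform empirical measures on batches X, Y."""
    C = np.sum((X[:, None, :] - Y[None, :, :]) ** 2, axis=-1)  # cost matrix
    K = np.exp(-C / eps)                                       # Gibbs kernel
    a = np.full(len(X), 1.0 / len(X))
    b = np.full(len(Y), 1.0 / len(Y))
    u = np.ones_like(a)
    for _ in range(n_iter):          # Sinkhorn fixed-point iterations
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]  # entropic transport plan
    return float(np.sum(P * C))

def sinkhorn_divergence(X, Y, eps=1.0):
    # debiasing removes the entropic offset, so S(X, X) = 0 and the
    # gradient no longer pulls generated samples toward a blurred target
    return (sinkhorn_cost(X, Y, eps)
            - 0.5 * sinkhorn_cost(X, X, eps)
            - 0.5 * sinkhorn_cost(Y, Y, eps))
```

In a GAN such as SD-CGAN, this quantity would be computed between a batch of generated traffic features and a real batch, giving a geometry-aware training signal that is smoother than a plain discriminator loss.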